UK researchers
import pandas as pd
import holoviews as hv
import fastparquet as fp
from colorcet import fire
from datashader.bundling import directly_connect_edges, hammer_bundle
from holoviews.operation.datashader import datashade, dynspread
from holoviews.operation import decimate
from dask.distributed import Client
client = Client()                # local Dask cluster, used to parallelize edge bundling
hv.notebook_extension('bokeh','matplotlib')
decimate.max_samples=20000       # cap on how many points decimate will show at once
dynspread.threshold=0.01
datashade.cmap=fire[40:]         # skip the darkest fire colors so pixels stay visible on black
sz = dict(width=150,height=150)
%opts RGB [xaxis=None yaxis=None show_grid=False bgcolor="black"]
The files are stored in the efficient Parquet format:
r_nodes_file = '../data/calvert_uk_research2017_nodes.snappy.parq'
r_edges_file = '../data/calvert_uk_research2017_edges.snappy.parq'
r_nodes = hv.Points(fp.ParquetFile(r_nodes_file).to_pandas(index='id'), label="Nodes")
r_edges = hv.Curve( fp.ParquetFile(r_edges_file).to_pandas(index='id'), label="Edges")
len(r_nodes),len(r_edges)
We can render each collaboration as a single-line direct connection, but the result is a dense tangle:
%%opts RGB [tools=["hover"] width=400 height=400]
%time r_direct = hv.Curve(directly_connect_edges(r_nodes.data, r_edges.data),label="Direct")
dynspread(datashade(r_nodes,cmap=["cyan"])) + \
datashade(r_direct)
Detailed substructure of this graph becomes visible after bundling edges using a variant of Hurter, Ersoy, & Telea (ECV-2012), which takes several minutes even using multiple cores with Dask:
%time r_bundled = hv.Curve(hammer_bundle(r_nodes.data, r_edges.data),label="Bundled")
%%opts RGB [tools=["hover"] width=400 height=400]
dynspread(datashade(r_nodes,cmap=["cyan"])) + datashade(r_bundled)
Zooming into these plots reveals interesting patterns (if you are running a live Python server), but one immediately wants to ask what the various groupings of nodes might represent. With a small number of nodes or a small number of categories one could color-code the dots (using datashader's categorical color-coding support), but here we just have thousands of indistinguishable dots. Instead, let's use hover information so the viewer can at least see the identity of each node on inspection.
To do that, we'll first need to pull in something useful to hover, so let's load the names of each institution in the researcher list and merge that with our existing layout data:
node_names = pd.read_csv("../data/calvert_uk_research2017_nodes.csv", index_col="node_id", usecols=["node_id","name"])
node_names = node_names.rename(columns={"name": "Institution"})
node_names
r_nodes_named = pd.merge(r_nodes.data, node_names, left_index=True, right_index=True)
r_nodes_named.tail()
We can now overlay a set of points on top of the datashaded edges, which will provide hover information for each node. Here, the entire set of 15,000 nodes would be reasonably feasible to plot, but to show how to work with larger datasets we wrap the `hv.Points()` call with `decimate`, so that only a finite subset of the points will be shown at any one time. If a node of interest is not visible at a particular zoom level, you can simply zoom in on that region; at some point the number of visible points will fall below the specified decimate limit and the point you are looking for will be revealed.
%%opts Points (color="cyan") [tools=["hover"] width=900 height=650]
datashade(r_bundled, width=900, height=650) * \
decimate( hv.Points(r_nodes_named),max_samples=10000)
If you click around and hover, you should see interesting groups of nodes, and can then set up further interactive tools using HoloViews' stream support to reveal aspects relevant to your research interests or questions.
As you can see, datashader lets you work with very large graph datasets. There are trade-offs, however: a number of parameters must be tuned by trial and error, computationally expensive operations like edge bundling require care, and interactive hover information is only available for a limited subset of the data at any one time, due to the data-size limitations of current web browsers.